
    The Impact of a National Science Foundation Collaborative for Excellence in Teacher Preparation on an Undergraduate Chemistry Course for Non-Chemistry Science Majors

    In 1999 and 2000, Chemistry 312: Analytical Chemistry for non-chemistry science majors (taken in the junior or senior year) was revised as a result of the instructor’s involvement in the Center for Excellence in Teacher Preparation project and an NSF equipment grant. Changes included the introduction of a K-12 teaching requirement and greater emphasis on cooperative learning and inquiry-based exercises. These latter two pedagogical practices had more impact on the laboratory activities than on the classroom activities. Students in the laboratory were assigned defined roles within their groups, and all groups undertook a three-week research project. Students’ responses to the teaching requirement were (with a few exceptions in a class of over forty) positive, and several students identified themselves as future teachers. Responses to the group work associated with the laboratory and several homework exercises were less uniformly positive, with a significant number of students expressing concern that their grades were compromised by the presence of weaker students in their groups. The grades awarded, the overall percentages, and the exam scores of the students were compared for the years 1998, 1999, and 2000. There was a significant improvement in the overall percentages (and the exam scores) between 1998 and 1999, and between 1998 and 2000. Had the thresholds for awarding letter grades not been raised for 2000, there would have been 31 A’s awarded to the 44 students who completed the course.

    Optical Synoptic Telescopes: New Science Frontiers

    Over the past decade, sky surveys such as the Sloan Digital Sky Survey have proven the power of large data sets for answering fundamental astrophysical questions. This observational progress, based on a synergy of advances in telescope construction, detectors, and information technology, has had a dramatic impact on nearly all fields of astronomy and on areas of fundamental physics. The next-generation instruments, and the surveys that will be made with them, will maintain this revolutionary progress. The hardware and computational technical challenges and the exciting science opportunities are attracting scientists and engineers from astronomy, optics, low-light-level detectors, high-energy physics, statistics, and computer science. The history of astronomy has taught us repeatedly that there are surprises whenever we view the sky in a new way. This will be particularly true of discoveries emerging from a new generation of sky surveys. Imaging data from large ground-based active optics telescopes with sufficient etendue can address many scientific missions simultaneously. These new investigations will rely on the statistical precision obtainable with billions of objects. For the first time, the full sky will be surveyed deep and fast, opening a new window on a universe of faint moving and distant exploding objects as well as unraveling the mystery of dark energy. Comment: 12 pages, 7 figures
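
    For reference, the etendue invoked above is the standard figure of merit for survey speed: the product of effective collecting area and field-of-view solid angle. This is a textbook definition included here for context, not a formula taken from the paper:

```latex
% Etendue (grasp) of a survey telescope: collecting area times field-of-view solid angle.
% Survey speed to a fixed depth scales roughly linearly with E.
E = A_{\mathrm{eff}} \, \Omega_{\mathrm{FOV}} \qquad [\mathrm{m}^2\,\mathrm{deg}^2]
```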

    Sizing the European Shadow Banking System: A New Methodology

    One of the critical unanswered questions relating to the shadow banking system has been how to quantify its scale in an industry where entities, by design, are opaque and often outside regulated and publicly shared frameworks. However, almost all shadow banking entities, including hedge funds, private equity funds and special purpose vehicles ("SPVs"), interact with the financial markets via regulated investment banks. For example, many SPVs are in fact originated as part of investment banking business, and hedge funds typically transact in financial markets exclusively via the "prime brokerage" divisions of investment banks. This interface with the regulated banking environment combines with the typical practice by investment banks of equalizing compensation (including bonus) ratios to revenues globally, which allows identification of the implied difference in revenues, and hence assets, that represents the shadow banking system. The paper presents for critique the results of this methodology applied to the UK shadow banking system, including European business managed from the UK. The estimate, at £548 billion, implies a larger scale of shadow banking than previous estimates; combined with hedge fund assets of £360 billion (FSA, 2011), it gives total shadow banking assets of over £900 billion. It is proposed that the large gap between this paper's estimate and other estimates reflects the huge, and previously unknown, scale of the offshore activities of UK investment banks.
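
    A minimal sketch of the kind of back-of-the-envelope calculation the compensation-ratio methodology implies. The function names, sample figures, and revenue-to-asset conversion below are illustrative assumptions, not figures or formulas taken from the paper:

```python
# Illustrative sketch of a compensation-ratio approach to sizing shadow banking.
# All inputs below are made-up placeholders, not data from the paper or the FSA.

def implied_shadow_revenues(uk_compensation, global_comp_ratio, reported_uk_revenues):
    """Revenues implied by the UK pay bill, minus revenues reported onshore.

    If a bank equalizes compensation-to-revenue ratios globally, the revenues
    consistent with its UK compensation are uk_compensation / global_comp_ratio.
    Any excess over revenues reported in the UK is attributed to business booked
    elsewhere (the 'shadow' component).
    """
    implied_total = uk_compensation / global_comp_ratio
    return max(implied_total - reported_uk_revenues, 0.0)


def implied_shadow_assets(shadow_revenues, revenue_yield_on_assets):
    """Convert revenues into an asset estimate via an assumed revenue/asset yield."""
    return shadow_revenues / revenue_yield_on_assets


if __name__ == "__main__":
    # Placeholder inputs (GBP billions), chosen only to make the arithmetic concrete.
    shadow_rev = implied_shadow_revenues(
        uk_compensation=12.0, global_comp_ratio=0.45, reported_uk_revenues=20.0
    )
    print(implied_shadow_assets(shadow_rev, revenue_yield_on_assets=0.012))
```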

    Management of a Capital Stock by Strotz's Naive Planner

    A generalized version of the capital management problem posed in a classic paper by R. H. Strotz is analyzed for the case of the "naive" planner who fails to anticipate any impending change in his own preferences. By imposing progressively stronger restrictions on the primitives of the problem --- namely, the planner's discounting function, his utility index function, and the investment technology --- the path of the capital stock is characterized first implicitly as the solution to a differential equation and then explicitly via formulae that may or may not be expressible in closed form. Inasmuch as this procedure turns out to leave the discounting function essentially unrestricted, the theory can accommodate, in particular, decision makers who discount time according to the type of hyperbolic curve said to be suggested by psychological studies. Strategies for numerical computation of capital paths are discussed and are demonstrated in sample planning problems. Keywords: consumption, computation, hyperbolic discounting, time preference.
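
    As a concrete illustration of the kind of explicit characterization described, here is a minimal sketch under strong simplifying assumptions of my own (log utility, a linear investment technology f(k) = r*k, and a generalized hyperbolic discount function D(s) = (1 + a*s)^(-g/a) with g > a); none of these modeling choices are taken from the paper itself:

```python
# Minimal sketch of the naive planner's capital path under illustrative assumptions:
# log utility, linear technology f(k) = r*k, and generalized hyperbolic discounting
# D(s) = (1 + a*s)**(-g/a) with g > a.
# With log utility, the plan made at any instant consumes the fraction
# 1 / integral_0^inf D(s) ds = (g - a) of current wealth at its first instant;
# the naive planner re-plans continuously, so c(t) = (g - a) * k(t) and the
# capital stock grows (or shrinks) at the constant rate r - (g - a).
import numpy as np

def naive_capital_path(k0, r, a, g, t_grid):
    """Closed-form path k(t) = k0 * exp((r - (g - a)) * t) under the assumptions above."""
    assert g > a > 0, "need g > a > 0 for the discount integral to converge"
    return k0 * np.exp((r - (g - a)) * np.array(t_grid))

if __name__ == "__main__":
    t = np.linspace(0.0, 50.0, 6)
    print(naive_capital_path(k0=1.0, r=0.05, a=0.1, g=0.2, t_grid=t))
```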

    Cognitive Constraints, Contraction Consistency, and the Satisficing Criterion

    A theory of decision making is proposed that offers an axiomatic basis for the notion of "satisficing" postulated by Herbert Simon. The theory relaxes the standard assumption that the decision maker always fully perceives his preferences among the available alternatives, requiring instead that his ability to perceive any given preference be decreasing with respect to the complexity of the choice problem at hand. When complexity is aligned with set inclusion, this exercise is shown to be equivalent to abandoning the contraction consistency axiom of classical choice theory. Keywords: Choice function, Perception, Revealed preference, Threshold
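
    To make the contraction-consistency connection concrete, here is a small toy construction of my own (not the paper's axiomatization): the agent perceives pairwise preferences only when the menu is small enough and otherwise falls back on the first acceptable option, so an alternative chosen from a large menu can be rejected from a smaller sub-menu, violating Sen's condition alpha.

```python
# Toy illustration (not the paper's construction) of how complexity-limited
# perception can violate contraction consistency (Sen's alpha):
# an alternative chosen from a large menu is rejected from a smaller sub-menu.

TRUE_PREFERENCE = ["b", "a", "c"]   # b preferred to a preferred to c
PERCEPTION_LIMIT = 2                # preferences are perceived only in menus of size <= 2

def choose(menu):
    """Pick the best *perceived* alternative; in over-complex menus no preference
    is perceived and the agent satisfices on the first option offered."""
    menu = list(menu)
    if len(menu) <= PERCEPTION_LIMIT:
        # preference fully perceived: maximize it
        return min(menu, key=TRUE_PREFERENCE.index)
    # too complex: take the first acceptable option
    return menu[0]

if __name__ == "__main__":
    print(choose(["a", "b", "c"]))   # 'a' chosen from the large menu
    print(choose(["a", "b"]))        # 'b' chosen, so 'a' is dropped: alpha fails
```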

    Management of a Capital Stock by Strotz's Naive Planner

    The capital management problem posed by R. H. Strotz is analyzed for the case of the "naive" planner who fails to anticipate changes in his own preferences. By imposing progressively stronger restrictions on the primitives of the problem - namely, the discounting function, the utility index function, and the investment technology - the planner's behavior is characterized first as the solution to an ordinary differential equation and then via explicit formulae. Inasmuch as these characterizations leave the discounting function essentially unrestricted, the theory can accommodate, in particular, decision makers who discount time according to the hyperbolic and "quasi-hyperbolic" curves used in applied work and said to be supported by psychological studies. Comparative statics of the model are discussed, as are extensions of the analysis to allow for credit constraints, limited foresight, and partial commitment. Keywords: Consumption, Commitment, Hyperbolic discounting, Time preference
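
    For context, the hyperbolic and quasi-hyperbolic discount curves the abstract refers to are conventionally written as follows (standard background from the applied literature, not notation taken from the paper itself):

```latex
% Generalized hyperbolic discounting (continuous time) and the beta-delta
% quasi-hyperbolic form common in applied work (discrete time).
D_{\mathrm{hyp}}(t) = (1 + \alpha t)^{-\gamma/\alpha}, \qquad
D_{\mathrm{qh}}(t) =
\begin{cases}
  1 & t = 0, \\
  \beta\,\delta^{t} & t = 1, 2, \dots
\end{cases}
```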

    Bifurcation analysis of a model of the budding yeast cell cycle

    We study the bifurcations of a set of nine nonlinear ordinary differential equations that describe the regulation of the cyclin-dependent kinase that triggers DNA synthesis and mitosis in the budding yeast, Saccharomyces cerevisiae. We show that Clb2-dependent kinase exhibits bistability (stable steady states of high or low kinase activity). The transition from low to high Clb2-dependent kinase activity is driven by transient activation of Cln2-dependent kinase, and the reverse transition is driven by transient activation of the Clb2 degradation machinery. We show that a four-variable model retains the main features of the nine-variable model. In a three-variable model exhibiting birhythmicity (two stable oscillatory states), we explore possible effects of extrinsic fluctuations on cell cycle progression. Comment: 31 pages, 13 figures
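
    A minimal numerical sketch of the kind of bistability check described above. The toy one-variable system and its parameters below are stand-ins chosen for illustration; they are not the paper's nine-, four-, or three-variable models. Integrating the same dynamics from a low and a high initial condition lands on two different stable steady states:

```python
# Toy illustration of bistability (not the paper's cell-cycle model):
# dx/dt = k1 + k2 * x**2 / (J**2 + x**2) - k3 * x
# has two stable steady states for suitable parameters, so trajectories
# started low settle near x ~ 0.05 and trajectories started high near x ~ 1.5.
import numpy as np
from scipy.integrate import solve_ivp

K1, K2, K3, J = 0.02, 1.0, 0.6, 0.5   # illustrative parameter values

def rhs(t, x):
    return K1 + K2 * x**2 / (J**2 + x**2) - K3 * x

if __name__ == "__main__":
    low = solve_ivp(rhs, (0.0, 200.0), [0.0]).y[0, -1]
    high = solve_ivp(rhs, (0.0, 200.0), [2.0]).y[0, -1]
    print(f"steady state from low start:  {low:.3f}")
    print(f"steady state from high start: {high:.3f}")
```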

    Measurement of the Mass Profile of Abell 1689

    In this letter we present calibrated mass and light profiles of the rich cluster of galaxies Abell 1689 out to $1\,h^{-1}$ Mpc from the center. Faint blue galaxies at high redshift, present at high surface density and selected by their low surface brightness, are unique tools for mapping the projected mass distribution of foreground mass concentrations. The systematic gravitational lens distortions of $10^4$ of these background galaxies in 15 arcmin fields reveal detailed mass profiles for intervening clusters of galaxies, and are a direct measure of the growth of mass inhomogeneity. The mass is measured directly, avoiding uncertainties encountered in velocity or X-ray derived mass estimates. Mass in the rich cluster Abell 1689 follows smoothed light outside 100 $h^{-1}$ kpc, with a rest-frame V band mass-to-light ratio of $400 \pm 60\,h^{-1}\,(M/L_V)_\odot$. Near the cluster center, mass appears to be more smoothly distributed than light. Out to a radius of $1\,h^{-1}$ Mpc the total mass follows a steeper than isothermal profile. Comparing with preliminary high resolution N-body clustering simulations for various cosmogonies on these scales, these data are incompatible with hot dark matter, a poor fit to most mixed dark matter models, and favor open or $\Lambda > 0$ cold dark matter. Substructure is seen in both the mass and the light, but detailed correspondence is erased on scales less than 100 $h^{-1}$ kpc. Comment: 13 pages, uuencoded, compressed postscript file, 2 figures included; additional 1 Mbyte figure available on request. Only change is that in the original the error bars on Fig. 5 were a factor of 2 too large.
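
    For background, one standard route from measured image distortions to a projected mass profile is aperture densitometry, which converts the azimuthally averaged tangential shear into a mean convergence contrast and hence a lower bound on the projected mass. The relation is quoted here only as context and is not necessarily the exact estimator used in the paper:

```latex
% Aperture densitometry (zeta-statistic): how the azimuthally averaged tangential
% shear <gamma_t> between radii theta_1 and theta_2 bounds the projected mass.
\zeta(\theta_1,\theta_2)
  = \bar{\kappa}(\theta<\theta_1) - \bar{\kappa}(\theta_1<\theta<\theta_2)
  = \frac{2}{1-\theta_1^2/\theta_2^2}\int_{\theta_1}^{\theta_2}\langle\gamma_t\rangle\, d\ln\theta,
\qquad
M_{2\mathrm{D}}(<\theta_1) \;\ge\; \pi\,(D_d\theta_1)^2\,\Sigma_{\mathrm{cr}}\,\zeta .
```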

    Separating Gravitational Wave Signals from Instrument Artifacts

    Central to the gravitational wave detection problem is the challenge of separating features in the data produced by astrophysical sources from features produced by the detector. Matched filtering provides an optimal solution for Gaussian noise, but in practice, transient noise excursions or "glitches" complicate the analysis. Detector diagnostics and coincidence tests can be used to veto many glitches that might otherwise be misinterpreted as gravitational wave signals. The glitches that remain can lead to long tails in the matched filter search statistics and drive up the detection threshold. Here we describe a Bayesian approach that incorporates a more realistic model for the instrument noise, allowing for fluctuating noise levels that vary independently across frequency bands, and deterministic "glitch fitting" using wavelets as "glitch templates", the number of which is determined by a trans-dimensional Markov chain Monte Carlo algorithm. We demonstrate the method's effectiveness on simulated data containing low-amplitude gravitational wave signals from inspiraling binary black hole systems, and simulated non-stationary and non-Gaussian noise comprised of a Gaussian component with the standard LIGO/Virgo spectrum and injected glitches of various amplitudes, prevalence, and variety. Glitch fitting allows us to detect significantly weaker signals than standard techniques. Comment: 21 pages, 18 figures
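
    A minimal sketch of the per-band noise model the abstract describes, under assumptions of my own (frequency-domain data, a known baseline PSD, and one multiplicative scale parameter per band). The actual analysis additionally fits wavelet glitch templates with a trans-dimensional MCMC, which is not reproduced here:

```python
# Sketch of a Gaussian log-likelihood whose noise level floats independently
# in each frequency band (illustrative only; not the paper's code).
import numpy as np

def banded_log_likelihood(data_f, model_f, psd, freqs, band_edges, band_scales, df):
    """data_f, model_f: complex frequency-domain data and signal+glitch model.
    psd: baseline one-sided noise PSD; band_scales: one multiplier per band."""
    # assign each frequency bin to a band and scale the PSD accordingly
    band_index = np.digitize(freqs, band_edges) - 1
    scaled_psd = psd * np.asarray(band_scales)[band_index]
    resid2 = np.abs(data_f - model_f) ** 2
    # whitened-residual term plus a normalization that penalizes simply
    # inflating the noise estimate to absorb everything
    return -np.sum(2.0 * df * resid2 / scaled_psd + np.log(np.pi * scaled_psd))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    freqs = np.linspace(20.0, 1024.0, 2048)
    df = freqs[1] - freqs[0]
    psd = 1e-46 * (freqs / 100.0) ** -4 + 1e-47   # toy detector-like spectrum
    noise = (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)) * np.sqrt(psd / (4 * df))
    print(banded_log_likelihood(noise, 0.0, psd, freqs,
                                band_edges=[20.0, 100.0, 400.0, 1025.0],
                                band_scales=[1.0, 1.0, 1.0], df=df))
```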